
Algorithm 3: Primal-Dual Method. Initialize the particles $\{\theta_{i,0}\}_{i=1}^{n}$ and $\lambda_0$.
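The excerpt shows only the initialization step of Algorithm 3. As a hedged sketch of how a primal-dual particle scheme of this shape is commonly organized (the Langevin-style primal update, the constraint function `g`, and the step sizes below are assumptions, not the paper's exact method):

```python
import numpy as np

def primal_dual_sampler(grad_log_p, g, grad_g, theta0, n_steps=1000,
                        eta_theta=1e-2, eta_lam=1e-2, rng=None):
    """Hedged sketch of a primal-dual particle scheme (not the paper's exact
    Algorithm 3): particles run noisy gradient ascent on the Lagrangian
    log p(theta) - lam * g(theta), while lam does projected gradient ascent
    on the average violation of a constraint g(theta) <= 0."""
    rng = np.random.default_rng(rng)
    theta = np.array(theta0, dtype=float)   # particles {theta_{i,0}}, shape (n, d)
    lam = 0.0                               # initial multiplier lambda_0
    for _ in range(n_steps):
        # Primal step: Langevin-style update of each particle.
        drift = grad_log_p(theta) - lam * grad_g(theta)
        theta += eta_theta * drift + np.sqrt(2.0 * eta_theta) * rng.standard_normal(theta.shape)
        # Dual step: raise lam when the constraint is violated on average.
        lam = max(0.0, lam + eta_lam * float(np.mean(g(theta))))
    return theta, lam
```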

Neural Information Processing Systems

So we can check that $\frac{\mathrm{d}}{\mathrm{d}t}E(q_t,\lambda_t) \le 0$ in both cases. Combining the two cases yields the result.

The target is a mixture of Gaussians $\sum_{i=1}^{m}\mathcal{N}(\theta;\mu_i,\sigma_i^2)$, where $m$ is fixed to 5 in all the experiments (a code sketch of such a target follows below).

Monotonic Bayesian Neural Networks. In this experiment, we use the COMPAS dataset (J. …). The task is to predict whether the individual will commit a crime again within 2 years.
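For concreteness, a minimal sketch of the mixture-of-Gaussians target mentioned above; the means, standard deviations, and equal component weights are placeholders, not the paper's experimental values:

```python
import numpy as np
from scipy.stats import norm

m = 5                                   # number of components, as in the text
mus = np.linspace(-4.0, 4.0, m)         # placeholder component means
sigmas = np.full(m, 0.5)                # placeholder component std. deviations

def mixture_pdf(theta):
    """Equal-weight Gaussian mixture density (1/m) * sum_i N(theta; mu_i, sigma_i^2).

    Equal weights are an assumption; the excerpt writes the sum
    sum_i N(theta; mu_i, sigma_i^2) without weights."""
    comps = [norm.pdf(theta, loc=mu, scale=s) for mu, s in zip(mus, sigmas)]
    return np.mean(comps, axis=0)
```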



$$\mathcal{L}_{\mathrm{IPF}}(\theta) := \mathbb{E}\big[\, Y^{\theta}_{s} + \widehat{Y}^{\phi}_{s} \,\big|\, X_s = x,\ s = 0 \,\big] = \log\rho_T(\theta; x) + \int \mathbb{E}\big[\, \mathrm{d}Y^{\theta}_{t} + \mathrm{d}\widehat{Y}^{\phi}_{t} \,\big|\, X_0 = x \,\big] \tag{24a}$$
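A sketch of one way to read (24a), under the assumption (standard in FBSDE treatments) that $Y^{\theta}_{t} + \widehat{Y}^{\phi}_{t}$ tracks the log-density along the trajectory with terminal value pinned to the prior, $Y^{\theta}_{T} + \widehat{Y}^{\phi}_{T} = \log\rho_T$; the sign of the accumulated increments depends on the paper's forward/backward integration convention, so this is a reading of the formula rather than a derivation from the paper:

```latex
% Pathwise fundamental theorem of calculus, taken in expectation.
% Assumes Y_T + \widehat{Y}_T = \log\rho_T; increment signs follow one
% common convention and may be flipped relative to (24a).
\mathbb{E}\big[\, Y^{\theta}_{0} + \widehat{Y}^{\phi}_{0} \,\big|\, X_0 = x \,\big]
  = \underbrace{\mathbb{E}\big[\, Y^{\theta}_{T} + \widehat{Y}^{\phi}_{T} \,\big|\, X_0 = x \,\big]}_{=\ \log\rho_T}
  \;-\; \int_{0}^{T} \mathbb{E}\big[\, \mathrm{d}\big(Y^{\theta}_{t} + \widehat{Y}^{\phi}_{t}\big) \,\big|\, X_0 = x \,\big]
```

In words: the objective recovers the log-likelihood at $s = 0$ from the terminal prior term plus the increments accumulated along the path.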

Neural Information Processing Systems

SB-FBSDE is a new class of generative models that, inspired by recent advances in understanding deep learning through the optimal control perspective [61-63], adopts Lemma 5 to generalize score-based diffusion models.
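To make the construction concrete, here is a very rough, hedged sketch of how a model of this kind can estimate a log-likelihood by simulating the forward SDE and accumulating FBSDE increments. All callables (`f`, `g`, `Z`, `Z_hat`, `log_p_T`) are assumptions standing in for the drift, diffusion, learned policies, and prior; this is not the paper's exact scheme:

```python
import numpy as np

def sb_loglik_sketch(x0, f, g, Z, Z_hat, log_p_T, T=1.0, n_steps=200, rng=None):
    """Rough Euler-Maruyama sketch of an FBSDE-style likelihood estimate
    (illustrative only). Simulates the forward SDE
        dX_t = (f(X_t, t) + g(t) * Z(X_t, t)) dt + g(t) dW_t
    and accumulates the quadratic terms that appear in SB-FBSDE-style
    likelihood formulas. The divergence term div(g * Z_hat - f) that also
    appears in the integrand is omitted for brevity (it is typically
    estimated with Hutchinson's trick), and signs follow one common
    convention, which may differ from the paper's (24a)."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    acc = 0.0
    for k in range(n_steps):
        t = k * dt
        z, z_hat = Z(x, t), Z_hat(x, t)
        acc += (0.5 * z @ z + 0.5 * z_hat @ z_hat + z @ z_hat) * dt
        x = x + (f(x, t) + g(t) * z) * dt + g(t) * np.sqrt(dt) * rng.standard_normal(x.shape)
    return log_p_T(x) - acc
```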




Appendix for Riemannian Continuous Normalizing Flows

Neural Information Processing Systems

In the following, we provide a brief overview of Riemannian geometry and constant-curvature manifolds, specifically the Poincaré ball and hypersphere models.

Sphere. In the two-dimensional setting $d = 2$, we rely on polar coordinates to parametrize the sphere $S^2$. In the following subsection we recall that this regularization term can also be motivated from an estimator's variance perspective.

D.2 Frobenius norm: Hutchinson's estimator. Hutchinson's estimator (Hutchinson, 1990) is a simple way to obtain a stochastic estimate of the trace of a matrix. The variance of this estimator thus depends on the Frobenius norm of the vector field's Jacobian (a minimal sketch follows below).

Then $(\gamma(t_n))$ is also a Cauchy sequence by Equation 16. So for every sequence $(t_n)$ in $(a,b)$ that converges to $b$, we have that $(\gamma(t_n))$ converges to $p$.
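Since the excerpt invokes Hutchinson's estimator without showing it, here is a minimal, self-contained sketch; the function name `hutchinson_trace` and the Rademacher probes are illustrative choices, not the appendix's exact construction:

```python
import numpy as np

def hutchinson_trace(matvec, dim, n_samples=100, rng=None):
    """Hutchinson's trace estimator: tr(A) = E[eps^T (A eps)] for probe
    vectors eps with zero mean and identity covariance. `matvec` computes
    A @ eps, so A (e.g. a Jacobian) never has to be materialized."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        eps = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe
        total += eps @ matvec(eps)
    return total / n_samples

# Usage on an explicit matrix; in a CNF, matvec would instead be a
# Jacobian-vector product of the vector field.
A = np.random.default_rng(0).standard_normal((5, 5))
print(hutchinson_trace(lambda v: A @ v, dim=5), np.trace(A))
```

The single-probe variance of this estimator grows with the squared Frobenius norm of the matrix (for Rademacher probes, only the off-diagonal entries contribute), which is the link the text draws between the estimator's variance and the Frobenius norm of the Jacobian.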